1 Theoretical analysis

1.1 Graphical illustrations of key equations

Fig. 1 illustrates key equations in the main text as well as in the supplementary materials. [Figure 1 panels: (a) physical space; (b) neural space]

Neural Information Processing Systems

The bigger µ is, the better the error correction. For a set of matrices {M(x)} that form a group, a matrix representation M(x) is equivalent to another representation M′(x) if there exists an invertible matrix P such that M′(x) = P M(x) P⁻¹ for each x. A matrix representation is reducible if it is equivalent to a block-diagonal matrix representation, i.e., we can find a matrix P such that P M(x) P⁻¹ is block diagonal for every x. If M is block-diagonal, M = diag(M_k, k = 1, ..., K), with nonequivalent blocks, and each block M_k cannot be further reduced, then the matrix elements (M_{k,ij}(x)) are orthogonal basis functions of x. Such orthogonality relations were proved by Schur [15] for finite groups, and by the Peter-Weyl theorem for compact Lie groups [13].
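As a concrete check of these relations, the sketch below (a NumPy illustration; the choice of the cyclic group Z_N and the particular matrix P are ours, not from the text) verifies Schur orthogonality for the 1-D complex irreducible representations of Z_N, and shows that a 2-D rotation block M(θ) is equivalent, via an invertible P, to the diagonal representation diag(e^{iθ}, e^{−iθ}).

```python
import numpy as np

# Schur orthogonality for the cyclic group Z_N: the 1-D complex irreps are
# chi_k(x) = exp(2*pi*i*k*x/N), and (1/N) * sum_x chi_k(x) conj(chi_l(x)) = delta_kl.
N = 8
x = np.arange(N)
chars = np.exp(2j * np.pi * np.outer(np.arange(N), x) / N)  # chars[k, x] = chi_k(x)
gram = chars @ chars.conj().T / N
assert np.allclose(gram, np.eye(N))  # matrix elements of distinct irreps are orthogonal

# Equivalence of representations: a real 2-D rotation block M(theta) is
# equivalent to diag(e^{i*theta}, e^{-i*theta}) via P^{-1} M P.
theta = 2 * np.pi / N
M = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
P = np.array([[1, 1], [-1j, 1j]]) / np.sqrt(2)  # columns are eigenvectors of M
D = np.linalg.inv(P) @ M @ P
assert np.allclose(D, np.diag([np.exp(1j * theta), np.exp(-1j * theta)]))
```

Note that over the reals the 2-D rotation block cannot be reduced further, while over the complex numbers it splits into the two 1-D irreps shown, which is why the equivalence above requires a complex P.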







5 Supplementary Material


Here we give a general description of the model. Complete versions of the dendritic update rules (summarised in Eqns (2) & (3)) are given below. The notation admits the possible presence of biases as well as weights, though biases could equivalently be absorbed into the weights by appending a constant synaptic input. We update the layers from bottom to top: first we update the latent or "environment" layer, then each layer above it in turn. Learning rules are conceptually summarised by the equations given in the main text, Eqn (6): if the prediction error and one of the presynaptic inputs are both consistently large (i.e. over an extended period), the corresponding synaptic weight is updated. We add synaptic noise to the dendritic activations, low-pass filtered so that the noise is relatively slow and weak.
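The update scheme described above can be sketched as follows. This is a minimal schematic, not the paper's implementation: the layer sizes, nonlinearity, learning rate, and noise parameters are all hypothetical, Eqns (2), (3) and (6) are not reproduced here, and the error-gated Hebbian rule and low-pass-filtered noise stand in only schematically for the full dendritic update rules.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical architecture: three layers, updated from bottom to top.
sizes = [4, 6, 3]
W = [rng.normal(0.0, 0.1, (sizes[i + 1], sizes[i])) for i in range(len(sizes) - 1)]
acts = [np.zeros(s) for s in sizes]
noise = [np.zeros(s) for s in sizes]

def sweep(x, lr=0.01, tau=0.9, sigma=0.01):
    """One bottom-to-top sweep: set the lowest ("environment") layer,
    update each layer above it in turn with slow, low-pass-filtered
    synaptic noise added to the dendritic activations, then apply an
    error-gated Hebbian-style weight update (weights change when the
    prediction error and the presynaptic input are both large)."""
    acts[0] = np.asarray(x, dtype=float)
    for i, w in enumerate(W):
        # Low-pass filtered noise: slow and weak relative to the dynamics.
        noise[i + 1] = tau * noise[i + 1] + (1 - tau) * rng.normal(0.0, sigma, sizes[i + 1])
        dendrite = w @ acts[i] + noise[i + 1]
        acts[i + 1] = np.tanh(dendrite)
    for i, w in enumerate(W):
        # Prediction error: gap between the (noisy) activation and its
        # noiseless prediction from the layer below.
        err = acts[i + 1] - np.tanh(w @ acts[i])
        W[i] += lr * np.outer(err, acts[i])  # error x presynaptic input

for _ in range(100):
    sweep(rng.normal(size=sizes[0]))
```

The product form `err * presynaptic input` is the key point: a weight only moves when its postsynaptic error and its own input are simultaneously large, matching the verbal description of Eqn (6) above.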